Switch Dockerfile image to wolfi and add pipeline for vulnerability scanning #3063
Conversation
great start! I added a few comments
catalog-info.yaml
Outdated
schedules:
  Daily main:
    branch: main
    cronline: '@daily'
wondering if we should consider running this less frequently? I think it would depend on what @elastic/search-extract-and-transform prefer, but the scenario I picture is something like:
- this job runs once a week, or maybe even once every two weeks
- if Trivy identifies any vulnerabilities, a message is sent to a Slack channel, or a GitHub issue is created
let's see what folks think 👍
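For illustration, running this weekly could be as small a change as the cronline in catalog-info.yaml (a sketch only; the schedule name is a placeholder, and I'm assuming Buildkite accepts '@weekly' the same way it accepts '@daily'):

schedules:
  Weekly main:
    branch: main
    cronline: '@weekly'  # placeholder cadence; an explicit cron expression would also work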
- if Trivy identifies any vulnerabilities, a message is sent to a Slack channel, or a GitHub issue is created
I think these notifications could potentially be a duplicate of existing Snyk issues detected on the official container image. What's the value added by these new notifications?
is Snyk going to scan the images produced by these CI jobs?
I think this PR is doing 3 things, but maybe doesn't need to:
- switches our Dockerfile to use chainguard (has customer value)
- adds CI to validate the Dockerfile works (has customer value)
- adds CI to fail if a vuln is found (internal value, with indirect customer value)
I think (3) could probably be done separately, and reuse whatever machinery we use to scan other artifacts, especially since (1) and (2) will significantly cut down on false-positives that we'd have if we tried to do (3) on its own today.
The CI jobs in this PR do not push images to the docker registry; instead they rely on Trivy for vulnerability scanning, which runs within the context of the pipeline.
The alternative approach is to push the images to an internal namespace on the docker registry and request that these be added to Snyk. This would result in duplicate reports though, as we're already scanning docker.elastic.co/integrations/elastic-connectors with Snyk, which is built from the Dockerfile.wolfi image and is technically the exact same base OS build.
What's the value added by these new notifications?
...
which is technically the exact same base OS build.
These are really good points that I hadn't considered. Given that with this PR we are:
- switching the Dockerfile to become distroless, making the resultant image considerably more secure
- making the Dockerfile essentially the same as the Dockerfile.wolfi, the latter of which produces the images that are pushed and scanned by Snyk
- adding nothing in the Dockerfile except the bare minimum that we need to run connectors (assuming we get rid of git and make 👍)
... then I actually don't see much value in adding the Trivy step (let alone adding notifications based on its results). I wish I'd had these realizations sooner, apologies @kostasb.
What do you all think?
@kostasb thanks, I like the idea of leaving the Trivy step there, but not having it send any notifications. But I'll defer to E&T folks (@seanstory @artem-shelkovnikov) to say what they prefer 👍
I have no objection to leaving it (without notifications), but I also don't think a human will be manually looking at the outputs often. So if we're ok with treating it essentially as a source of debug logs that may or may not be examined, great.
I understand the following might be a big direction change for this PR, but considering the feedback above, I think it might be worth exploring: what if, instead of adding a new pipeline (with informative value only), we add the Trivy scan step to the existing pipeline? I think the informative value would be higher for developers working in PRs: the Trivy scan would be part of their feedback loop, and not something relegated to a different pipeline.
Similarly to what we did for ent-search, I would run the Trivy scan step in PRs to main, and on merge to main and other current release branches. What do you all think?
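To sketch what that could look like (a hypothetical step, reusing the Trivy agent image already used in this PR; the if condition assumes standard Buildkite conditionals and would need an extra clause for release branches):

  - label: "Trivy scan of the OSS Dockerfile image"
    if: build.pull_request.base_branch == "main" || build.branch == "main"
    agents:
      provider: k8s
      image: "docker.elastic.co/ci-agent-images/trivy:latest"
    command: |-
      mkdir -p .artifacts
      buildkite-agent artifact download '.artifacts/*.tar.gz*' .artifacts/
      find .artifacts -type f -name '*.tar.gz*' -exec trivy image --quiet --input {} \;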
Merging the extensible Dockerfile build/test/scan steps into the main pipeline makes sense to me; we can proceed this way if the team agrees.
I pushed a commit that merges the extensible Dockerfiles pipeline into the main pipeline (easy to revert if the team @seanstory @artem-shelkovnikov has a different opinion after all).
As a bonus, this also enables testing as part of this PR (the new dedicated pipeline wouldn't be picked up until after it's merged).
Since in this PR we are building a Dockerfile on a different base image (wolfi-base): for reference, here is a list of packages which, among others (like bash and curl), are available in the wolfi python-3.11 image used for the production image builds. Do we want this list of packages added to the wolfi-base image? Do connectors require some, or all, of them? @elastic/search-extract-and-transform
I'm not sure if we need the packages mentioned; we really only need dependencies for 3.11 python + …
Thanks, then I'll stick with the bare minimum packages and we can iterate if needed. I added both amd64 and arm64 image builds, mainly for the purpose of smoke testing (with …).
Do we need some sort of release note for the new base image? It could be a "breaking" change for those users who build their images on top of our Dockerfiles.
Update re: the packages included with the base image vs the python one that we do docker image builds on: @oli-g suggested checking whether installing …
Just to check - is this a publicly available image, or does pulling it require logging in to a registry?
Yes, we've confirmed that this is a publicly available image that can be downloaded by unauthenticated (not logged in to any registry) docker instances.
We can mention it in our release notes indeed. I don't expect a lot of customers to build custom docker images, and it will likely just work for them, but it would not hurt to mention that we've recently updated our docker images to make them more secure, with potentially incompatible changes that customers would have to fix.
Thank you, I added a release note in the issue description.
The pipeline steps have been tested successfully. The OSS (…)
As discussed, Trivy scans are included for informative and logging purposes but won't cause the pipeline to fail, since we're getting alerts on equivalent images from Snyk.
I am planning to tag this against 8.18.0 without backporting, because I consider it too big of a change for a patch release and we want to include a clear release note with it. Let me know if you have any objections or comments.
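For reference, one way to keep such a scan step non-blocking is to rely on Trivy's default exit code (0 unless --exit-code is set) and, optionally, Buildkite's soft_fail. A sketch only, not necessarily how this PR wires it:

  - label: "Trivy scan (informational only)"
    soft_fail: true  # surface results without failing the build
    agents:
      provider: k8s
      image: "docker.elastic.co/ci-agent-images/trivy:latest"
    command: |-
      mkdir -p .artifacts
      buildkite-agent artifact download '.artifacts/*.tar.gz*' .artifacts/
      # no --exit-code flag, so findings are logged but the command still exits 0
      find .artifacts -type f -name '*.tar.gz*' -exec trivy image --quiet --input {} \;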
great work 🚀
I'll let E&T folks review and give the go-ahead on merging
thank you for doing this!
I think as a user it would be clearer to read oss instead of extensible: what do you all think? I see other Elastic products doing something similar, see here for example.
I also left a couple of comments related to the following: I'm proposing to not build, test and scan the artifacts produced from Dockerfile.ftest, as they're too similar to the artifacts produced from Dockerfile.
.buildkite/pipeline.yml
Outdated
  # ----
  # Extensible Dockerfile.ftest build, tests and vulnerability scan on amd64
  # ----
  - label: "Building amd64 Docker image from extensible Dockerfile.ftest"
    agents:
      provider: aws
      instanceType: m6i.xlarge
      imagePrefix: ci-amazonlinux-2
    env:
      ARCHITECTURE: "amd64"
      DOCKERFILE_PATH: "Dockerfile.ftest"
      DOCKER_IMAGE_NAME: "docker.elastic.co/ci-agent-images/elastic-connectors-extensible-dockerfile-ftest"
      DOCKER_ARTIFACT_KEY: "elastic-connectors-extensible-dockerfile-ftest"
    command: ".buildkite/publish/build-docker.sh"
    key: "build_extensible_dockerfile_ftest_image_amd64"
    artifact_paths: ".artifacts/*.tar.gz"
  - label: "Testing amd64 image built from extensible Dockerfile.ftest"
    agents:
      provider: aws
      instanceType: m6i.xlarge
      imagePrefix: ci-amazonlinux-2
    env:
      ARCHITECTURE: "amd64"
      DOCKERFILE_PATH: "Dockerfile.ftest"
      DOCKER_IMAGE_NAME: "docker.elastic.co/ci-agent-images/elastic-connectors-extensible-dockerfile-ftest"
      DOCKER_ARTIFACT_KEY: "elastic-connectors-extensible-dockerfile-ftest"
    depends_on: "build_extensible_dockerfile_ftest_image_amd64"
    key: "test_extensible_dockerfile_ftest_image_amd64"
    commands:
      - "mkdir -p .artifacts"
      - buildkite-agent artifact download '.artifacts/*.tar.gz*' .artifacts/ --step build_extensible_dockerfile_ftest_image_amd64
      - ".buildkite/publish/test-docker.sh"
  - label: "Trivy Scan amd64 extensible Dockerfile.ftest image"
    timeout_in_minutes: 10
    depends_on:
      - test_extensible_dockerfile_ftest_image_amd64
    key: "trivy-scan-amd64-extensible-dockerfile-ftest-image"
    agents:
      provider: k8s
      image: "docker.elastic.co/ci-agent-images/trivy:latest"
    command: |-
      mkdir -p .artifacts
      buildkite-agent artifact download '.artifacts/*.tar.gz*' .artifacts/ --step build_extensible_dockerfile_ftest_image_amd64
      trivy --version
      env | grep TRIVY
      find .artifacts -type f -name '*.tar.gz*' -exec trivy image --quiet --input {} \;
I think we are not going to get a lot of value from these steps, given that Dockerfile.ftest is almost equivalent to Dockerfile.
So I'm wondering: what if instead (maybe in a different PR, as a follow-up) we delete the two .ftest Dockerfiles and try to live with only two Dockerfiles instead of four (Dockerfile and Dockerfile.wolfi), and we move the RUN .venv/bin/pip install -r requirements/ftest.txt command out of the Dockerfiles and into wherever we actually run those specs? A rough sketch of what that could look like is below.
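For illustration only (hypothetical step; the label and make targets are placeholders, and this assumes the functional tests run from a repo checkout on the CI agent rather than inside the hardened image):

  - label: "Functional tests"
    commands:
      - "make install"                                      # placeholder: set up the .venv on the agent
      - ".venv/bin/pip install -r requirements/ftest.txt"   # moved out of the Dockerfile
      - "make ftest"                                        # placeholder for however the specs are invoked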
That makes sense, I'll remove the ftest builds/tests/scans from this pipeline and follow up with an issue about replacing/removing the ftest dockerfiles.
Created an issue for it: https://github.com/elastic/search-developer-productivity/issues/3611
@oli-g I just wanted to bring up an issue I foresee with this suggestion:
In this PR we remove make and then run the image as nonroot, which doesn't allow installing packages beyond what's already available.
We won't be able to run the functional test steps on top of this image as these require make. We'd need to build another container image layer with this Dockerfile as base, to re-install make in order to run the functional test.
Got it, I missed this "little detail"... thank you! I guess we'll have to find a different approach, or leave things as they are for now. We'll discuss it in the new ticket.
I agree and adapted the PR to remove ftest from the pipeline and refer to the Dockerfile as OSS instead of extensible.
Thank you for this work!
Let me know if there are any objections to merging this PR towards 8.18.0 without backporting to previous minor releases, so that it doesn't show up in a patch release. I consider this a potential breaking change for some users, as reflected by the release note in the issue description. If there are no concerns by then, I plan to merge on Monday, Jan 20.
https://github.com/elastic/search-developer-productivity/issues/3547
Description
The goal of this PR is to switch the Dockerfile to a wolfi base image and add a pipeline for vulnerability scanning.
This is based on @acrewdson's draft. It uses wolfi-base images and adds buildkite pipelines to build, test and scan the resulting docker images using Trivy. A notification method for vulnerability reports from Trivy is TBD.
Release Note
The OSS Dockerfile provided in the connectors repo has been updated to use a different base image for improved security. Users building custom docker images based on this Dockerfile may have to review their configuration for compatibility with the new base image.